On the Utility Gain of Iterative Bayesian Update for Locally Differentially Private Mechanisms

Authors

Abstract

This paper investigates the utility gain of using Iterative Bayesian Update (IBU) for private discrete distribution estimation from data obfuscated with Locally Differentially Private (LDP) mechanisms. We compare the performance of IBU to Matrix Inversion (MI), a standard estimation technique, for seven LDP mechanisms designed for one-time data collection and for other seven LDP mechanisms designed for multiple data collections (e.g., RAPPOR). To broaden the scope of our study, we also varied the utility metric, the number of users n, the domain size k, and the privacy parameter $\epsilon$, using both synthetic and real-world data. Our results suggest that IBU can be a useful post-processing tool for improving the utility of LDP mechanisms in different scenarios without any additional privacy cost. For instance, our experiments show that IBU can provide better utility than MI, especially in high privacy regimes (i.e., when $\epsilon$ is small). Our paper provides insights for practitioners on using IBU in conjunction with existing LDP mechanisms for more accurate privacy-preserving data analysis. Finally, we implemented IBU for all fourteen LDP mechanisms into the state-of-the-art multi-freq-ldpy Python package ( https://pypi.org/project/multi-freq-ldpy/ ) and open-sourced the code used for the experiments as tutorials.
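
To make the two estimators compared in the abstract concrete, the following is a minimal NumPy sketch, not the multi-freq-ldpy API: it assumes k-ary Generalized Randomized Response (GRR) as the LDP mechanism, and the parameter values and function names are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
k, n, epsilon = 10, 50_000, 0.5
true_dist = rng.dirichlet(np.ones(k))                 # hidden distribution to estimate

# GRR channel matrix: A[x, y] = Pr[report y | true value x]
p = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
q = 1.0 / (np.exp(epsilon) + k - 1)
A = np.full((k, k), q)
np.fill_diagonal(A, p)

# Each user keeps the true value w.p. p, otherwise reports one of the other k-1 values
true_vals = rng.choice(k, size=n, p=true_dist)
keep = rng.random(n) < p
other = (true_vals + rng.integers(1, k, size=n)) % k
reports = np.where(keep, true_vals, other)
obs = np.bincount(reports, minlength=k) / n           # empirical reported distribution

# Matrix Inversion (MI): solve obs = theta @ A; can leave the probability simplex
theta_mi = np.linalg.solve(A.T, obs)

# Iterative Bayesian Update (IBU): EM-style fixed point, always returns a distribution
def ibu(A, obs, iters=10_000, tol=1e-12):
    theta = np.full(A.shape[0], 1.0 / A.shape[0])     # uniform prior
    for _ in range(iters):
        denom = theta @ A                             # predicted report distribution
        new_theta = theta * (A @ (obs / denom))       # Bayes-weighted re-estimate
        if np.abs(new_theta - theta).sum() < tol:
            return new_theta
        theta = new_theta
    return theta

theta_ibu = ibu(A, obs)
print("MI  MSE:", np.mean((theta_mi - true_dist) ** 2))
print("IBU MSE:", np.mean((theta_ibu - true_dist) ** 2))
```

In a setup like this, MI can return negative frequency estimates while IBU always stays on the probability simplex, which is one intuition for why IBU can yield better utility when $\epsilon$ is small.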


Similar articles

Optimizing Locally Differentially Private Protocols

Protocols satisfying Local Differential Privacy (LDP) enable parties to collect aggregate information about a population while protecting each user’s privacy, without relying on a trusted third party. LDP protocols (such as Google’s RAPPOR) have been deployed in real-world scenarios. In these protocols, a user encodes his private information and perturbs the encoded value locally before sending...

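As a concrete illustration of the local encode-and-perturb step described in the entry above, here is a hedged sketch of Symmetric Unary Encoding (the basic one-round building block of RAPPOR-style protocols); the function name and parameters are ours, not taken from the cited work.

```python
import numpy as np

def sue_client(value, k, epsilon, rng):
    # Encode: one-hot bit vector of length k
    bits = np.zeros(k, dtype=int)
    bits[value] = 1
    # Perturb: a 1 stays 1 w.p. p, a 0 becomes 1 w.p. q (satisfies epsilon-LDP)
    p = np.exp(epsilon / 2) / (np.exp(epsilon / 2) + 1)
    q = 1.0 / (np.exp(epsilon / 2) + 1)
    flip_probs = np.where(bits == 1, p, q)
    return (rng.random(k) < flip_probs).astype(int)   # noisy report sent to the server

rng = np.random.default_rng(0)
report = sue_client(value=3, k=8, epsilon=1.0, rng=rng)
```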

Differentially Private Bayesian Optimization

Bayesian optimization is a powerful tool for finetuning the hyper-parameters of a wide variety of machine learning models. The success of machine learning has led practitioners in diverse real-world settings to learn classifiers for practical problems. As machine learning becomes commonplace, Bayesian optimization becomes an attractive method for practitioners to automate the process of classif...


Locally Differentially Private Protocols for Frequency Estimation

Protocols satisfying Local Differential Privacy (LDP) enable parties to collect aggregate information about a population while protecting each user’s privacy, without relying on a trusted third party. LDP protocols (such as Google’s RAPPOR) have been deployed in real-world scenarios. In these protocols, a user encodes his private information and perturbs the encoded value locally before sending...


Postprocessing for Iterative Differentially Private Algorithms

Iterative algorithms for differential privacy run for a fixed number of iterations, where each iteration learns some information from data and produces an intermediate output. However, the algorithm only releases the output of the last iteration, from which the accuracy of the algorithm is judged. In this paper, we propose a postprocessing algorithm that seeks to improve the accuracy by incorpo...


Differentially private Bayesian learning on distributed data

Many applications of machine learning, for example in health care, would benefit from methods that can guarantee privacy of data subjects. Differential privacy (DP) has become established as a standard for protecting learning results, but the proposed algorithms require a single trusted party to have access to the entire data, which is a clear weakness. We consider DP Bayesian learning in a dis...



Journal

Journal title: Lecture Notes in Computer Science

Year: 2023

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-031-37586-6_11